Data processing and analysis pipelines in cosmological survey experiments introduce data perturbations that can significantly degrade the performance of deep-learning-based models. Given the increased adoption of supervised deep learning methods for processing and analyzing cosmological survey data, the assessment of data perturbation effects and the development of methods that increase model robustness are increasingly important. In the context of morphological classification of galaxies, we study the effects of perturbations in imaging data. In particular, we examine the consequences of using neural networks when training on baseline data and testing on perturbed data. We consider perturbations associated with two primary sources: 1) increased observational noise as represented by higher levels of Poisson noise, and 2) data processing noise incurred by steps such as image compression or telescope errors. We also test the efficacy of domain adaptation techniques in mitigating the perturbation-driven errors. We use classification accuracy, latent space visualizations, and latent space distance to assess model robustness. Without domain adaptation, we find that processing pixel-level errors easily flip the classification into an incorrect class, and that higher observational noise makes a model trained on low-noise data unable to classify galaxy morphologies. On the other hand, we show that training with domain adaptation improves model robustness and mitigates the effects of these perturbations, improving the classification accuracy by 23% on data with higher observational noise. Domain adaptation also increases the latent space distance between the baseline and the misclassified perturbed images by a factor of ~2.3, making the model more robust to perturbations.
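As a minimal illustration (not the authors' pipeline), the two perturbation sources described above could be simulated with NumPy as follows; the function names and the `exposure_scale` parameter are assumptions for this sketch:

```python
import numpy as np

def add_observational_noise(img, exposure_scale=1.0, rng=None):
    """Simulate observational (Poisson) noise on a flux image.

    Lower exposure_scale mimics a shorter exposure time, giving a
    noisier image; img is a float array of non-negative fluxes.
    """
    if rng is None:
        rng = np.random.default_rng()
    return rng.poisson(img * exposure_scale) / exposure_scale

def one_pixel_perturbation(img, row, col, value):
    """Worst-case processing error: a single pixel forced to `value`."""
    out = img.copy()
    out[row, col] = value
    return out
```

A robustness test would then compare a classifier's predictions on the baseline image against its predictions on the perturbed versions.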
We present edBB-Demo, a demonstrator of an AI-powered research platform for student monitoring in remote education. The edBB platform aims to study the challenges associated with user recognition and behavior understanding in digital platforms. This platform has been developed for data collection, acquiring signals from a variety of sensors including keyboard, mouse, webcam, microphone, smartwatch, and an Electroencephalography band. The information captured from the sensors during the student sessions is modelled in a multimodal learning framework. The demonstrator includes: i) Biometric user authentication in an unsupervised environment; ii) Human action recognition based on remote video analysis; iii) Heart rate estimation from webcam video; and iv) Attention level estimation from facial expression analysis.
The field of automatic biomedical image analysis crucially depends on robust and meaningful performance metrics for algorithm validation. However, current metric usage is often ill-informed and does not reflect the underlying domain interest. Here, we present a comprehensive framework that guides researchers towards choosing performance metrics in a problem-aware manner. Specifically, we focus on biomedical image analysis problems that can be interpreted as classification tasks at the image, object, or pixel level. The framework first compiles domain interest-, target structure-, dataset-, and algorithm output-related properties of a given problem into a problem fingerprint, while also mapping it to the appropriate problem category, namely image-level classification, semantic segmentation, instance segmentation, or object detection. It then guides users through the process of selecting and applying a set of appropriate validation metrics while making them aware of potential pitfalls related to individual choices. In this paper, we describe the current status of the Metrics Reloaded recommendation framework, with the goal of obtaining constructive feedback from the image analysis community. The current version has been developed within an international consortium of more than 60 image analysis experts and will be made openly available as a user-friendly toolkit after community-driven optimization.
We seek to evaluate the detection performance of a rapid primary screening tool for COVID-19 based on the cough sound from 8,380 clinically validated samples with laboratory molecular-test results (2,339 COVID-19 positive and 6,041 COVID-19 negative). Samples were clinically labeled according to the results and severity based on quantitative RT-PCR (qRT-PCR) analysis, cycle threshold, and lymphocyte count from the patients. Our proposed generic method is an algorithm based on Empirical Mode Decomposition (EMD), with subsequent classification based on a tensor of audio features and a deep artificial neural network classifier with convolutional layers called DeepCough. Two different versions of DeepCough, based on the number of tensor dimensions, i.e., DeepCough2D and DeepCough3D, have been investigated. These methods have been deployed in a multi-platform proof-of-concept web app, CoughDetect, to administer this test anonymously. COVID-19 recognition achieved a promising AUC (Area Under Curve) of 98.80% ± 0.83%, sensitivity of 96.43% ± 1.85%, and specificity of 96.20% ± 1.74%, and an AUC of 81.08% ± 5.05% for the recognition of three severity levels. Our proposed web tool and the underpinning algorithm for the robust, fast, point-of-need identification of COVID-19 facilitate rapid detection of the infection. We believe that it has the potential to significantly hamper the COVID-19 pandemic across the world.
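The reported evaluation metrics can be computed from model scores and binary labels; a minimal sketch (illustrative function names, not the DeepCough implementation) is:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive scores higher than a random negative."""
    scores_neg = np.asarray(scores_neg)
    u = 0.0
    for p in scores_pos:
        u += np.sum(p > scores_neg) + 0.5 * np.sum(p == scores_neg)
    return u / (len(scores_pos) * len(scores_neg))

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)
```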
In this paper, we develop FaceQvec, a software component for estimating the conformity of facial images with each of the points contemplated in ISO/IEC 19794-5, a quality standard that defines general quality guidelines for face images that would make them acceptable or unacceptable for use in official documents such as passports or ID cards. This type of quality assessment tool can help to improve face recognition accuracy, as well as to identify which factors affect the quality of a given face image so that action can be taken to eliminate or reduce those factors, e.g., with post-processing techniques or re-acquisition of the image. FaceQvec consists of the automation of 25 individual tests related to different points contemplated in the aforementioned standard, as well as other characteristics of the images considered to be related to facial quality. We first include the results of the quality tests evaluated on a development dataset captured under realistic conditions. We use those results to adjust the decision threshold of each test. We then check the tests again on an evaluation database that contains new face images not seen during development. The evaluation results demonstrate the accuracy of the individual tests for checking compliance with ISO/IEC 19794-5. FaceQvec is available online (https://github.com/uam-biometrics/faceqvec).
Deep learning models are increasingly adopted across a wide range of scientific domains, especially for handling high-dimensional data and large volumes of scientific data. However, due to their complexity and overparametrization, these models tend to be brittle, especially to unintentional adversarial perturbations that can arise from common image processing, such as the compression or blurring often seen in real scientific data. It is crucial to understand this brittleness and to develop models that are robust to these adversarial perturbations. To this end, we study the effects of observational noise from exposure time, as well as the worst-case scenario of a one-pixel attack as a proxy for compression or telescope errors, on models trained to distinguish between galaxies of different morphologies in LSST mock data. We also explore how domain adaptation techniques can help improve robustness against such naturally occurring attacks, helping scientists build more trustworthy and stable models.
This paper presents a machine learning approach to multidimensional item response theory (MIRT), a class of latent factor models that can be used to model and predict student performance from observed assessment data. Inspired by collaborative filtering, we define a general class of models that includes many MIRT models. We discuss the use of penalized joint maximum likelihood (JML) to estimate individual models and cross-validation to select the best performing model. This model evaluation process can be optimized using batching techniques, such that even sparse large-scale data can be analyzed efficiently. We illustrate our approach with simulated and real data, including an example from a massive open online course (MOOC). The high-dimensional model fit to this large and sparse dataset does not lend itself well to traditional methods of factor interpretation. By analogy to recommender-system applications, we propose an alternative "validation" of the factor model, using auxiliary information about the popularity of items consulted during an open-book exam in the course.
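A generic MIRT model of the kind this class includes can be written as a logistic model in a latent factor space; the sketch below uses a multidimensional 2PL parameterization with an L2 penalty. The function names, parameterization, and penalty weight `lam` are assumptions for illustration, not the paper's exact objective:

```python
import numpy as np

def mirt_prob(theta, a, b):
    """P(person i answers item j correctly): sigmoid(theta_i . a_j - b_j).

    theta: (N, K) person factors, a: (M, K) item loadings, b: (M,) item
    difficulties; returns an (N, M) probability matrix.
    """
    z = theta @ a.T - b
    return 1.0 / (1.0 + np.exp(-z))

def penalized_jml_loss(theta, a, b, y, mask, lam=0.1):
    """Penalized joint negative log-likelihood over observed responses.

    y: 0/1 response matrix; mask: 1 where a response was observed
    (accommodating sparse data); lam: L2 penalty on person and item
    parameters, as in penalized joint maximum likelihood (JML).
    """
    p = mirt_prob(theta, a, b)
    eps = 1e-12
    nll = -np.sum(mask * (y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)))
    return nll + lam * (np.sum(theta ** 2) + np.sum(a ** 2))
```

Minimizing this loss jointly over `theta`, `a`, and `b` (e.g., by alternating gradient steps over batches) yields the fitted factors; cross-validated predictive accuracy then selects among candidate dimensionalities K.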
Real-world robotic grasping can be done robustly if a complete 3D Point Cloud Data (PCD) of an object is available. However, in practice, PCDs are often incomplete when objects are viewed from few and sparse viewpoints before the grasping action, leading to the generation of wrong or inaccurate grasp poses. We propose a novel grasping strategy, named 3DSGrasp, that predicts the missing geometry from the partial PCD to produce reliable grasp poses. Our proposed PCD completion network is a Transformer-based encoder-decoder network with an Offset-Attention layer. Our network is inherently invariant to the object pose and the permutation of its points, and generates PCDs that are geometrically consistent and properly completed. Experiments on a wide range of partial PCDs show that 3DSGrasp outperforms the best state-of-the-art method on PCD completion tasks and largely improves the grasping success rate in real-world scenarios. The code and dataset will be made available upon acceptance.
Modelling and forecasting real-life human behaviour using online social media is an active endeavour of interest in politics, government, academia, and industry. Since its creation in 2006, Twitter has been proposed as a potential laboratory that could be used to gauge and predict social behaviour. During the last decade, the user base of Twitter has been growing and becoming more representative of the general population. Here we analyse this user base in the context of the 2021 Mexican Legislative Election. To do so, we use a dataset of 15 million election-related tweets in the six months preceding election day. We explore different election models that assign political preference to either the ruling parties or the opposition. We find that models using data with geographical attributes determine the results of the election with better precision and accuracy than conventional polling methods. These results demonstrate that analysis of public online data can outperform conventional polling methods, and that political analysis and general forecasting would likely benefit from incorporating such data in the immediate future. Moreover, the same Twitter dataset with geographical attributes is positively correlated with results from official census data on population and internet usage in Mexico. These findings suggest that we have reached a period in time when online activity, appropriately curated, can provide an accurate representation of offline behaviour.
Optical coherence tomography (OCT) captures cross-sectional data and is used for the screening, monitoring, and treatment planning of retinal diseases. Technological developments to increase the speed of acquisition often result in systems with a narrower spectral bandwidth, and hence a lower axial resolution. Traditionally, image-processing-based techniques have been utilized to reconstruct subsampled OCT data, and more recently, deep-learning-based methods have been explored. In this study, we simulate reduced axial scan (A-scan) resolution by Gaussian windowing in the spectral domain and investigate the use of a learning-based approach for image feature reconstruction. In anticipation of the reduced resolution that accompanies wide-field OCT systems, we build upon super-resolution techniques to explore methods to better aid clinicians in their decision-making to improve patient outcomes, by reconstructing lost features using a pixel-to-pixel approach with an altered super-resolution generative adversarial network (SRGAN) architecture.
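The reduced-resolution simulation step can be sketched in a few lines: multiplying an A-scan's spectral-domain samples by a Gaussian window narrows the effective source bandwidth, which broadens the axial point-spread function after the inverse FFT. The function names and the `bandwidth_frac` parameterization below are assumptions for this sketch, not the study's exact code:

```python
import numpy as np

def gaussian_window_spectrum(spectrum, bandwidth_frac):
    """Apply a Gaussian window (centered on the spectrum) to an A-scan's
    spectral-domain samples; smaller bandwidth_frac mimics a narrower
    spectral bandwidth and thus lower axial resolution."""
    n = spectrum.shape[-1]
    k = np.arange(n) - n / 2          # spectral sample index about center
    sigma = bandwidth_frac * n / 2    # window width as a fraction of span
    window = np.exp(-0.5 * (k / sigma) ** 2)
    return spectrum * window

def reconstruct_ascan(spectrum):
    """Depth profile (A-scan) as the magnitude of the inverse FFT."""
    return np.abs(np.fft.ifft(spectrum))
```

Pairs of full-bandwidth and windowed A-scans produced this way can then serve as the high-/low-resolution training pairs for a super-resolution network.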